Automatic Compile-Time Parallelization of Logic Programs for Restricted, Goal-Level, Independent And-Parallelism
Authors
Abstract
A framework for the automatic parallelization of (constraint) logic programs is proposed and proved correct. Intuitively, the parallelization process replaces conjunctions of literals with parallel expressions. Such expressions trigger at run-time the exploitation of restricted, goal-level, independent and-parallelism. The parallelization process performs two steps. The first one builds a conditional dependency graph (which can be simplified using compile-time analysis information), while the second transforms the resulting graph into linear conditional expressions, the parallel expressions of the &-Prolog language. Several heuristic algorithms for the latter ("annotation") process are proposed and proved correct. Algorithms are also given which determine if there is any loss of parallelism in the linearization process with respect to a proposed notion of maximal parallelism. Finally, a system is presented which implements the proposed approach. The performance of the different annotation algorithms is compared experimentally in this system by studying the time spent in parallelization and the effectiveness of the results in terms of speedups.
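As a sketch of the kind of transformation the abstract describes, a sequential conjunction can be annotated as a linear conditional parallel expression in &-Prolog. The clause and predicate names below are illustrative, not taken from the paper; `indep/2` and the `&/2` parallel operator follow the usual &-Prolog conventions:

```prolog
% Sequential clause: q/1 and r/1 may or may not share variables
% through X and Y, so they cannot be blindly run in parallel.
p(X, Y) :- q(X), r(Y), s(X, Y).

% Annotated clause: the conjunction q(X), r(Y) is replaced by a
% conditional parallel expression. At run time, if X and Y are
% independent (share no variables), q and r execute in parallel
% via &/2; otherwise they fall back to sequential conjunction.
p(X, Y) :-
    ( indep(X, Y) -> q(X) & r(Y)
    ; q(X), r(Y)
    ),
    s(X, Y).
```

When compile-time analysis can prove the independence condition always holds, the annotator can drop the run-time check and emit the unconditional parallel conjunction `q(X) & r(Y)` directly, which is the graph-simplification step the abstract mentions.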
Similar resources
The DCG, UDG, and MEL Methods for Automatic Compile-time Parallelization of Logic Programs for Independent And-parallelism
There has been significant interest in parallel execution models for logic programs which exploit Independent And-Parallelism (IAP). In these models, it is necessary to determine which goals are independent and therefore eligible for parallel execution and which goals have to wait for which others during execution. Although this can be done at run-time, it can imply a very heavy overhead. In th...
Full text
Toward Advanced Symbolic Analysis
Bae, Hansang. M.S.E.C.E., Purdue University, May, 2003. Toward Advanced Symbolic Analysis. Major Professor: Rudolf Eigenmann. Automatic parallelization of programs at the loop level requires advanced program analysis techniques. The goal of these techniques is supporting other parallelization techniques by providing as much compile-time information as possible. Evaluation of symbolic expression...
Full text
Support for Thread-Level Speculation into OpenMP
Manual parallelization requires in-depth knowledge of the problem, an understanding of the underlying architecture, and familiarity with the parallel programming model. OpenMP allows code to be parallelized while "avoiding" these requirements, but compilers' automatic parallelization only proceeds when there is no risk. Thread-Level Speculation (TLS) can extract parallelism when a compile-time dependence analysis cannot guarantee that the...
Full text
Parallelizing Compilation through Load-Time Scheduling for a Superscalar Processor Family
Superscalar processors improve the execution time of sequential programs by exploiting instruction-level parallelism (ILP). The efficiency of parallelization at run-time can be increased through an additional scheduling phase for a concrete target machine in the compiler. But if the target machine is not known at compile-time, scheduling must be deferred to a later phase immediately before prog...
Full text
Non-strict independence-based program parallelization using sharing and freeness information
The current ubiquity of multi-core processors has brought renewed interest in program parallelization. Logic programs allow studying the parallelization of programs with complex, dynamic data structures with (declarative) pointers in a comparatively simple semantic setting. In this context, automatic parallelizers which exploit and-parallelism rely on notions of independence in order to ensure ...
Full text
Journal title:
- J. Log. Program.
Volume 38, Issue -
Pages: -
Publication date: 1999